Learning from Ontology Streams with Semantic Concept Drift
Data stream learning has been largely studied for extracting knowledge
structures from continuous and rapid data records. In the semantic Web, data is
interpreted in ontologies and its ordered sequence is represented as an
ontology stream. Our work exploits the semantics of such streams to tackle the
problem of concept drift, i.e., unexpected changes in the data distribution
that cause most models to become less accurate over time. To this end, we
revisit (i) semantic inference in the context of supervised stream learning,
and (ii) models with semantic embeddings. Experiments show accurate prediction
on data from Dublin and Beijing.
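Concept drift of the kind described above is commonly surfaced by monitoring a model's error rate over the stream. The sliding-window check below is a generic illustration only (the window size and threshold are hypothetical values, and this is not the paper's semantic approach):

```python
from collections import deque

def drift_signal(errors, window=50, threshold=0.15):
    """Flag drift when the error rate over the last `window` records
    exceeds the running overall error rate by `threshold`
    (both parameter values are hypothetical)."""
    recent = deque(maxlen=window)
    total_err = total_n = 0
    flags = []
    for e in errors:           # e is 1 for a misprediction, 0 otherwise
        recent.append(e)
        total_err += e
        total_n += 1
        recent_rate = sum(recent) / len(recent)
        overall_rate = total_err / total_n
        flags.append(len(recent) == window
                     and recent_rate - overall_rate > threshold)
    return flags
```

On a stream whose error rate jumps partway through, the signal stays off early and fires once the recent window diverges from the historical rate.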
Knowledge-based Transfer Learning Explanation
Machine learning explanation can significantly boost machine learning's
application in decision making, but the usability of current methods is limited
in human-centric explanation, especially for transfer learning, an important
machine learning branch that aims at utilizing knowledge from one learning
domain (i.e., a pair of dataset and prediction task) to enhance prediction
model training in another learning domain. In this paper, we propose an
ontology-based approach for human-centric explanation of transfer learning.
Three kinds of knowledge-based explanatory evidence, with different
granularities, including general factors, particular narrators and core
contexts are first proposed and then inferred with both local ontologies and
external knowledge bases. The evaluation with US flight data and DBpedia
demonstrates their confidence and availability in explaining the
transferability of feature representation in flight departure delay
forecasting.
Comment: Accepted by International Conference on Principles of Knowledge
Representation and Reasoning, 201
Low-resource Personal Attribute Prediction from Conversation
Personal knowledge bases (PKBs) are crucial for a broad range of applications
such as personalized recommendation and Web-based chatbots. A critical
challenge to build PKBs is extracting personal attribute knowledge from users'
conversation data. Given some users of a conversational system, a personal
attribute and these users' utterances, our goal is to predict the ranking of
the given personal attribute values for each user. Previous studies often rely
on a relatively large amount of resources, such as labeled utterances and
external data, yet the attribute knowledge embedded in unlabeled utterances is
underutilized, and their performance in predicting some difficult personal
attributes remains unsatisfactory. In addition, some text classification
methods could be employed to solve this task directly, but they also perform
poorly on those difficult personal attributes. In this paper, we propose a
novel framework PEARL to predict personal attributes from conversations by
leveraging the abundant personal attribute knowledge from utterances under a
low-resource setting in which no labeled utterances or external data are
utilized. PEARL combines the biterm semantic information with the word
co-occurrence information seamlessly via employing the updated prior attribute
knowledge to refine the biterm topic model's Gibbs sampling process in an
iterative manner. The extensive experimental results show that PEARL
outperforms all the baseline methods not only on the task of personal attribute
prediction from conversations over two data sets, but also on the more general
weakly supervised text classification task over one data set.
Comment: Accepted by AAAI'2
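PEARL's topic-model component works over biterms, i.e., the unordered word pairs co-occurring within a short utterance, as in the Biterm Topic Model it refines. A minimal extraction sketch (the function name and example utterance are ours, for illustration):

```python
from itertools import combinations

def extract_biterms(tokens):
    """Enumerate all unordered word pairs (biterms) in a short text,
    as used by the Biterm Topic Model."""
    return [tuple(sorted(pair)) for pair in combinations(tokens, 2)]

# a toy utterance that might hint at a 'hobby' attribute
biterms = extract_biterms(["love", "weekend", "hiking"])
```

Each biterm's word co-occurrence statistics then feed the Gibbs-sampling topic inference that the abstract describes.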
Low power predictable memory and processing architectures
Great demand for power-optimized devices shows promising economic potential and draws much attention in industry and research. Due to the continuously shrinking CMOS process, not only dynamic power but also static power has emerged as a major concern in power reduction. Beyond power optimization, average-case power estimation is significant for power budget allocation but also challenging in terms of time and effort.

In this thesis, we introduce a methodology to support modular quantitative analysis for estimating the average power of circuits, based on two concepts named Random Bag Preserving and Linear Compositionality. It shortens simulation time while sustaining high accuracy, increasing the feasibility of power estimation for large systems.

For power saving, we first take advantage of the low-power characteristics of adiabatic logic and asynchronous logic to achieve ultra-low dynamic and static power. We propose two memory cells that can run in adiabatic and non-adiabatic modes; about 90% of dynamic power can be saved in adiabatic mode compared to other up-to-date designs, and about 90% of leakage power is saved. Secondly, a novel logic family, named Asynchronous Charge Sharing Logic (ACSL), is introduced, which considerably simplifies the realization of completion detection. Beyond the power reduction, ACSL brings another promising feature for average power estimation, data independence, which makes power estimation effortless and is meaningful for modular quantitative average-case analysis. Finally, a new asynchronous Arithmetic Logic Unit (ALU) with a ripple-carry adder, implemented using the logically reversible/bidirectional characteristic and exhibiting ultra-low power dissipation at a sub-threshold operating point, is presented. The proposed adder is able to operate multi-functionally.
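The linear-compositionality idea behind the modular estimation can be illustrated in a deliberately simplified form: estimate each module's average power from its own simulated samples, then compose additively (the function names and numbers below are ours, not the thesis's formalism):

```python
def module_average(power_samples):
    """Average-case power of one module, estimated from its own
    simulated power samples (hypothetical values)."""
    return sum(power_samples) / len(power_samples)

def composed_average(per_module_samples):
    """Simplified illustration of linear compositionality: assuming
    per-module average powers compose additively, the circuit's
    average power is the sum of the per-module averages, so each
    module can be simulated separately."""
    return sum(module_average(samples) for samples in per_module_samples)
```

The point of such a decomposition is that each module only needs to be simulated once, in isolation, rather than re-simulating the full system for every configuration.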
BoxEL: Concept and Role Box Embeddings for the Description Logic EL++
Description logic (DL) ontologies extend knowledge graphs (KGs) with
conceptual information and logical background knowledge. In recent years, there
has been growing interest in inductive reasoning techniques for such
ontologies, which promise to complement classical deductive reasoning
algorithms. Similar to KG completion, several existing approaches learn
ontology embeddings in a latent space, while additionally ensuring that they
faithfully capture the logical semantics of the underlying DL. However, they
suffer from several shortcomings, mainly due to a limiting role representation.
We propose BoxEL, which represents both concepts and roles as boxes (i.e.,
axis-aligned hyperrectangles) and demonstrate how it overcomes the limitations
of previous methods. We theoretically prove the soundness of our model and
conduct an extensive experimental evaluation, achieving state-of-the-art
results across a variety of datasets. As part of our evaluation, we introduce a
novel benchmark for subsumption prediction involving both atomic and complex
concepts.
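The geometric intuition behind such box embeddings can be sketched as follows: a subsumption C ⊑ D is predicted when C's box lies entirely inside D's box. This is only the containment idea, with illustrative names and coordinates, not BoxEL's actual scoring function:

```python
def box_subsumes(outer_min, outer_max, inner_min, inner_max):
    """Predict C ⊑ D when C's box (inner) lies entirely inside D's
    box (outer); boxes are axis-aligned hyperrectangles given by
    their min and max corner coordinates."""
    return (all(i >= o for i, o in zip(inner_min, outer_min))
            and all(i <= o for i, o in zip(inner_max, outer_max)))

# toy 2-D boxes (hypothetical embeddings)
animal = ([0.0, 0.0], [10.0, 10.0])
dog = ([2.0, 2.0], [5.0, 5.0])
```

Here `box_subsumes(*animal, *dog)` holds (Dog ⊑ Animal), while the reverse containment does not, which is how the embedding geometry mirrors the logical subsumption order.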
Dual box embeddings for the description logic EL++
OWL ontologies, whose formal semantics are rooted in Description Logic (DL), have been widely used for knowledge representation. Similar to Knowledge Graphs (KGs), ontologies are often incomplete, and maintaining and constructing them has proved challenging. While classical deductive reasoning algorithms use the precise formal semantics of an ontology to predict missing facts, recent years have witnessed growing interest in inductive reasoning techniques that can derive probable facts from an ontology. Similar to KGs, a promising approach is to learn ontology embeddings in a latent vector space, while additionally ensuring they adhere to the semantics of the underlying DL. While a variety of approaches have been proposed, current ontology embedding methods suffer from several shortcomings; in particular, they all fail to faithfully model one-to-many, many-to-one, and many-to-many relations and role inclusion axioms. To address this problem and improve ontology completion performance, we propose a novel ontology embedding method named Box2EL for the DL EL++, which represents both concepts and roles as boxes (i.e., axis-aligned hyperrectangles), and models inter-concept relationships using a bumping mechanism. We theoretically prove the soundness of Box2EL and conduct an extensive experimental evaluation, achieving state-of-the-art results across a variety of datasets on the tasks of subsumption prediction, role assertion prediction, and approximating deductive reasoning.
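The bumping mechanism can be caricatured as translating a concept's box by a role-associated vector before a containment test, e.g. when checking an axiom of the form C ⊑ ∃r.D. The sketch below is a strong simplification with hypothetical names (the actual Box2EL model uses per-concept bump vectors and paired head/tail role boxes):

```python
def bump_box(box_min, box_max, bump_vec):
    """Translate ('bump') a concept box by a role-associated vector
    (simplified; Box2EL's real mechanism is more involved)."""
    shifted_min = [m + b for m, b in zip(box_min, bump_vec)]
    shifted_max = [m + b for m, b in zip(box_max, bump_vec)]
    return shifted_min, shifted_max

def inside(outer_min, outer_max, inner_min, inner_max):
    """Axis-aligned box containment test."""
    return (all(i >= o for i, o in zip(inner_min, outer_min))
            and all(i <= o for i, o in zip(inner_max, outer_max)))

def satisfies_existential(c_min, c_max, bump_vec, d_min, d_max):
    """Rough check of C ⊑ ∃r.D: bump C's box by r's vector and test
    whether the result lies inside D's box."""
    s_min, s_max = bump_box(c_min, c_max, bump_vec)
    return inside(d_min, d_max, s_min, s_max)
```

Translating the box rather than treating the role as a single region is what lets this family of models distinguish one-to-many and many-to-one role patterns that simpler role representations conflate.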